Intro

Research on pragmatic inference has to date paid relatively little attention to the effects of pragmatic reasoning on common ground beliefs, or background world knowledge, although revision of such beliefs is a strategy that listeners may use to interpret pragmatically unexpected utterances. Here we present a Rational Speech Act (RSA) model (Frank & Goodman, 2012; Goodman & Stuhlmüller, 2013) of how background beliefs about activity habituality may be updated upon encountering informationally redundant descriptions of those activities. Intuitively, one expects that upon hearing something like “John went shopping. He paid the cashier!”, a comprehender may conclude that because paying the cashier during shopping is an entirely expected activity which doesn’t typically warrant mention, John must not be a habitual cashier-payer.

Additionally, we address an issue that arises when pragmatic reasoning depends in part on the possibility that a message was misheard or not attended to. Longer or otherwise more prominent utterances should have a better chance of being accurately perceived or attended to than less prominent but meaning-equivalent utterances (Wilson & Sperber, 2004), which may either generate or strengthen pragmatic inferences in response to those utterances (at the very least, an utterance that is not attended to will not generate an inference). Bergen & Goodman (2015) and Bergen (2016) demonstrate that the standard RSA model is unable to generate different inferences, or inferences of different strengths, for utterances with the same semantic meaning. Similarly to these authors, we build a model which incorporates the notion that more prominent utterances have a better chance of being attended to (or recalled accurately), and should therefore generate stronger inferences.

Data

Here, we consider utterances such as the following:

  1. John went shopping. He paid the cashier{!|.}
  2. John went shopping.

In (1), saying he paid the cashier is informationally redundant, as cashier-paying is in context a very predictable activity, which should automatically be assumed simply given the mention of shopping (Bower, Black, & Turner, 1979). The predicted and, in the case of (a) and (b), empirically validated (Kravtchenko & Demberg, 2015, n.d.) effects associated with the use and comprehension of such utterances are:

  a. As the utterance paid the cashier is informationally redundant, at face value it is pragmatically odd. Comprehenders resolve the pragmatic anomaly by concluding that cashier-paying is not, in fact, typical for this individual in this context, contrary to their prior beliefs.
  b. Expending more effort on communicating an informationally redundant utterance, for example by using exclamatory prosody, should strengthen the inference, as increased articulatory effort (and increased attempts at attention-grabbing) reflects greater speaker intent to transmit precisely this message to the listener.
  c. Speakers should preferentially use more attentionally prominent utterances to transmit particularly unusual or unexpected meanings, even when doing so is relatively costly.

First, we very briefly present our empirical data, which we later feed into a series of models (standard RSA, RSA with joint reasoning, noisy-channel RSA).

Empirical priors

Prior beliefs regarding the likelihood of various activities occurring were collected empirically - by measuring the habituality (likelihood of occurrence) of the activity. This was done by asking comprehenders to rate, on a scale of 0 (never) to 100 (always), how often they thought someone engaged in a particular activity, when engaged in a certain event sequence (script) which, by common knowledge, habitually includes said activity:

  • “How often do you think John usually pays the cashier, when grocery shopping?”

This question was asked after presenting comprehenders with either a neutral context mentioning a certain script, or a “wonky” context mentioning said script. The “wonky” context either hinted strongly, or explicitly stated, that the individual in question did not habitually engage in the usually-habitual activity. An example can be seen below:

  1. neutral: “John often goes to the grocery store around the corner from his apartment.”
  2. wonky: “John is typically broke, and doesn’t usually pay when he goes to the grocery store.”

Additionally, as a control, and to use as a comparison to the “wonky” condition above, we also collected ratings for “non-predictable” activities, which are consistent with the script, but not expected – for example, buying apples when grocery shopping:

  • “How often do you think John usually gets apples, when grocery shopping?”

On this and a separate page, I plot the distributions of ratings we collected from participants, and fit beta (probability) distributions to each condition. These plots show how likely any particular activity habituality is: since we don’t know precisely how habitual any given activity is (or is believed to be), we have the following ranges of estimates collected from our participants.

On this page I will only look at “typical world-predictable activity” activities - i.e., where an overt utterance is informationally redundant. I will look at activities which are not predictable, either by virtue of linguistic context or prior belief, on a separate page.

It is important to note, however, that an activity rated (for example) at 50%, on a never-to-always scale, is not necessarily one that participants believe occurs 50% of the time. These ratings should therefore be treated as rough relative estimates of activity habituality.

Typical Context - Predictable Activity

Context: John often goes to the grocery store around the corner from his apartment.
Question: How often do you think John usually pays the cashier, when grocery shopping?

Here, it is evident that the vast majority of comprehenders believe that John is a typical cashier-payer.

# scale to remove 0 and 1 values (add/subtract 0.001 from edges)
prior_typ_pred_scaled <- scale_ratings(prior_typ_pred$rating)

# fit beta distribution by maximum likelihood estimation
fit.prior_typ_pred <- fitdist(prior_typ_pred_scaled, "beta", method="mle")
## Fitting of the distribution ' beta ' by maximum likelihood 
## Parameters : 
##         estimate Std. Error
## shape1 2.1960256 0.08006598
## shape2 0.4108572 0.01033672
## Loglikelihood:  2420.291   AIC:  -4836.583   BIC:  -4825.283 
## Correlation matrix:
##           shape1    shape2
## shape1 1.0000000 0.5580665
## shape2 0.5580665 1.0000000
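For readers without R, the preprocessing and fit can be sketched in Python as follows. This is a reconstruction: `scale_ratings` is inferred from the comment in the R code above, the ratings are made up for illustration, and a method-of-moments fit stands in for the maximum likelihood fit used in the actual analysis.

```python
# Sketch of the preprocessing step and an alternative beta fit.
# NOTE: scale_ratings is reconstructed from the comment in the R code
# above; the actual analysis fits by maximum likelihood via fitdistrplus.
def scale_ratings(ratings):
    """Map 0-100 ratings into (0, 1), nudging edge values inward by 0.001."""
    return [min(max(r / 100.0, 0.001), 0.999) for r in ratings]

def fit_beta_moments(xs):
    """Method-of-moments estimates for the parameters of a Beta(a, b)."""
    n = len(xs)
    m = sum(xs) / n
    v = sum((x - m) ** 2 for x in xs) / (n - 1)
    common = m * (1 - m) / v - 1
    return m * common, (1 - m) * common

# hypothetical ratings, skewed towards "always" like the real data
ratings = [0, 40, 75, 90, 95, 100, 100, 100]
shape1, shape2 = fit_beta_moments(scale_ratings(ratings))
```

As in the fitted distribution above, a right-skewed sample yields shape1 > shape2, i.e., most mass near 1.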

General Model Setup

Possible Utterances: (in roughly increasing order of effort)

  • “(...)” (the null utterance: the activity goes unmentioned)
  • “He paid the cashier.” (plain)
  • “He paid the cashier!” (exclamatory prosody)
  • “Oh yeah, he paid the cashier.” (relevance-marking discourse marker)

Possible States:

  • the activity either happened or didn’t happen during the current episode

Possible Habitualities:

  • \(h \in [0, 1]\): the probability that the activity occurs during any given instance of the script

RSA

Utterance: \(u\) (the particular utterance, or lack thereof, uttered by the speaker)
Current activity state: \(s\) (did the activity occur during the current activity sequence in question)

The baseline RSA model is inherently unequipped to model changes in beliefs about the world that are independent of the current activity state (\(s\)):
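In its standard form (Frank & Goodman, 2012), the model consists of a literal listener \(L_0\), a speaker \(S\), and a pragmatic listener \(L_1\). Written to match the code below, with the utterance prior proportional to \(e^{-\mathrm{cost}(u)}\):

\[
\begin{aligned}
P_{L_0}(s \mid u) &\propto [\![u]\!](s) \cdot P(s) \\
P_{S}(u \mid s) &\propto e^{-\mathrm{cost}(u)} \cdot P_{L_0}(s \mid u)^{\alpha} \\
P_{L_1}(s \mid u) &\propto P(s) \cdot P_{S}(u \mid s)
\end{aligned}
\]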

Given that the literal meaning of paid the cashier (\([\![u]\!]\)) does not communicate anything about activity habituality directly, the standard RSA model can predict only that the cashier was definitely paid given the utterances in (1), and may or may not have been paid given utterance (2). Activity habituality by itself cannot be reasoned about in the standard RSA model, since all utterances are at face value equally consistent with all possible habitualities.

Below is the webppl code for this model, which can be run either on webppl.org, or locally.

// Current activity state
// the activity being described at this point in time either took 
// place, or didn't
var state = ["happened","didn't happen"]

// State priors
// assume highly predictable/habitual activity
// with a 90% chance of occurring, for purpose of demonstration
var statePrior = function() {
  categorical([0.9, 0.1], state) 
}

// Utterances
// choice of 4 utterances; prosody not modeled separately, as it
// affects only one variant
var utterance = ['oh yeah','exclamation','plain','(...)']

// Utterance cost
// (rough estimate of number of constituents + extra for
// articulatory effort)
var cost = {
  "oh yeah": 5,
  "exclamation": 4,
  "plain": 3,
  "(...)": 0
}

// Meaning
// literal meaning of all overt utterances is that activity happened.
// literal meaning of null "utterance" is consistent with all activity states
var meaning = function(utt,state) {
  utt === "oh yeah" ? state === "happened" : 
  utt === "exclamation" ? state === "happened" : 
  utt === "plain" ? state === "happened" : 
  utt === "(...)" ? true :
  true
}

// Speaker optimality
var alpha = 20

// Utterance prior
// utterance prior determined by utterance cost, as defined above
var utterancePrior = function() {
  var uttProbs = map(function(u) {return Math.exp(-cost[u]) }, utterance)
  return categorical(uttProbs, utterance)
}

// Literal listener
var literalListener = mem(function(utterance) {
  return Infer({model: function() {
    var state = statePrior()
    condition(meaning(utterance,state))
    return state
  }})
})

// Speaker
var speaker = mem(function(state) {
  return Infer({model: function() {
    var utterance = utterancePrior()
    factor(alpha * literalListener(utterance).score(state))
    return utterance
  }})
})

// Pragmatic listener
var pragmaticListener = function(utterance) {
  return Infer({model: function() {
    var state = statePrior()
    observe(speaker(state),utterance)
    return state
  }})
}

// literalListener("(...)")
// literalListener("plain")
// literalListener("exclamation")
// literalListener("oh yeah")

// speaker("happened")
// speaker("didn't happen")

// pragmaticListener("(...)")
// pragmaticListener("plain")
// pragmaticListener("exclamation")
// pragmaticListener("oh yeah")

Results

Here, it can clearly be seen that after “hearing” a null utterance (“(…)”), comprehenders preferentially conclude that the activity happened (they are not certain, but it is highly likely, given that we assume a high-habituality activity).

Overt utterances are uniformly consistent only with the interpretation that the activity happened.


As expected, if the activity happened, speakers preferentially say nothing, and only rarely use high-effort utterances.


As expected, pragmatic listeners infer that if an activity went unmentioned, it is slightly more likely (compared to baseline) to not have happened, given that the speaker has multiple viable alternatives to definitely communicate that it did happen. However, they still overwhelmingly conclude that it is far more likely that the activity occurred, than that it did not.

Overall, although this model behaves as expected, it does not tell us anything interesting, and we find out nothing about how habituality estimates might change as a result of hearing the utterance. The slightly lowered likelihood of the activity having occurred, in the case of the “(…)” utterance “heard” by pragmatic listeners, does however hint at changes in habituality estimates, based on the utterance the speaker chose.
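These observations can be double-checked by exact enumeration. The following Python sketch mirrors the webppl program above (same state prior, costs, and \(\alpha\)); it is an illustration, not part of the original analysis.

```python
import math

# Exact enumeration of the standard RSA model above, using the same
# state prior (0.9), utterance costs, and speaker optimality (alpha = 20).
states = ["happened", "didn't happen"]
state_prior = {"happened": 0.9, "didn't happen": 0.1}
utterances = ["oh yeah", "exclamation", "plain", "(...)"]
cost = {"oh yeah": 5, "exclamation": 4, "plain": 3, "(...)": 0}
alpha = 20

def meaning(utt, state):
    # all overt utterances entail that the activity happened;
    # the null utterance is consistent with both states
    return True if utt == "(...)" else state == "happened"

def normalize(scores):
    total = sum(scores.values())
    return {k: v / total for k, v in scores.items()}

def literal_listener(utt):
    return normalize({s: state_prior[s] * meaning(utt, s) for s in states})

def speaker(state):
    # P(u | s) proportional to exp(-cost(u)) * L0(s | u)^alpha
    return normalize({u: math.exp(-cost[u]) * literal_listener(u)[state] ** alpha
                      for u in utterances})

def pragmatic_listener(utt):
    return normalize({s: state_prior[s] * speaker(s)[utt] for s in states})

print(pragmatic_listener("(...)"))  # P(happened) drops slightly below the 0.9 prior
```

With these parameters, the pragmatic listener's probability that the activity happened after the null utterance comes out around 0.85, slightly below the 0.9 prior, while any overt utterance yields certainty that it happened.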

hRSA

A standard RSA model which incorporates joint reasoning (Degen, Tessler, & Goodman, 2015; Goodman & Frank, 2016) can model both changes in beliefs about the world, and changes in beliefs about the current activity state. Listeners can explicitly reason about the joint likelihood of a given habituality (\(h\)), and a given activity state (\(s\)), given a particular utterance (\(u\)):
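Written out (a reconstruction consistent with the code below), the pragmatic listener now computes a joint posterior, with the current state a Bernoulli draw given the habituality:

\[
\begin{aligned}
P(s = \text{happened} \mid h) &= h \\
P_{L_1}(h, s \mid u) &\propto P(h) \cdot P(s \mid h) \cdot P_{S}(u \mid s, h)
\end{aligned}
\]

where \(P_S\) and \(P_{L_0}\) are defined as before, but conditioned on \(h\) through the state prior.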

The literal listener does not reason about activity habituality, as this is not a part of the literal interpretation.

Here, we can feed our empirical priors directly into the model, where the likelihood of the activity occurring is conditional on the habituality. Whether a given activity occurred, or not (\(s\)), then, is simply a Bernoulli trial with \(p=h\).

var beta_high_a = 2.19602561493963
var beta_high_b = 0.410857186535822
var beta_low_a = 0.59051473988806
var beta_low_b = 0.599422405762914


// Is this a high-habit activity (paying the cashier when shopping) or a
// low-habit activity (buying apples, paying cashier as habitual non-payer)?
// (mostly for demonstration)
var activity = ["low-habit","high-habit"]

// Assume uniform prior over activity types
var activityPrior = function() {
  categorical([0.5, 0.5], activity)
}

// Habituality priors
// beta distributions fit to empirical priors
var habitualityPrior = function(activity) {
  activity === "high-habit" ? sample(Beta({a: beta_high_a, b: beta_high_b})) :
  activity === "low-habit" ? sample(Beta({a: beta_low_a, b: beta_low_b})) :
  true
}

// Current activity state
// the activity being described at this point in time either took place, or didn't
var state = ["happened","didn't happen"]

// State priors
// whether the activity took place is dependent on prior likelihood
var statePrior = function(habituality) {
  flip(habituality) ? state[0] : state[1]
}

// Utterances (intended)
// choice of 4 utterances; prosody not modeled separately, as it affects only one variant
var utterance = ['oh yeah','exclamation','plain','(...)']

// Utterance cost
// (rough estimate of number of constituents + extra for articulatory effort)
var cost = {
  "oh yeah": 5,
  "exclamation": 4,
  "plain": 3,
  "(...)": 0
}

// Meaning
// literal meaning of all overt utterances is that activity happened.
// literal meaning of null utterance is consistent with all activity states
var meaning = function(utt,state) {
  utt === "oh yeah" ? state === "happened" : 
  utt === "exclamation" ? state === "happened" : 
  utt === "plain" ? state === "happened" : 
  utt === "(...)" ? true :
  true
}

// Speaker optimality
var alpha = 20

// Utterance prior
// utterance prior determined by utterance cost, as defined above
var utterancePrior = function() {
  var uttProbs = map(function(u) {return Math.exp(-cost[u]) }, utterance)
  return categorical(uttProbs, utterance)
}

// Literal listener
var literalListener = mem(function(utterance, habituality) {
  return Infer({model: function() {
    var state = statePrior(habituality)
    condition(meaning(utterance,state))
    return state
  }})
})

// Speaker
var speaker = mem(function(state, habituality) {
  return Infer({model: function() {
    var utterance = utterancePrior()
    factor(alpha * literalListener(utterance, habituality).score(state))
    return utterance
  }})
})

// Pragmatic listener
// assume high-habit activity for demonstration
var pragmaticListener = function(utterance, info) {
  return Infer({method: "rejection", samples: 5000, model: function() {
    var activity = "high-habit"
    var habituality = habitualityPrior(activity)
    var state = statePrior(habituality)
    observe(speaker(state, habituality),utterance)
    info === "both" ? {state: state, habituality: habituality} :
    info === "state" ? state :
    info === "habituality" ? habituality :
    true
  }})
}


// literalListener("(...)",0.95)
// literalListener("(...)",0.5)
// literalListener("(...)",0.05)

// speaker("happened",0.95)
// speaker("happened",0.5)
// speaker("happened",0.05)

// pragmaticListener("(...)","both")
// pragmaticListener("plain","both")
// pragmaticListener("exclamation","both")
// pragmaticListener("oh yeah","both")

Results

Here, one can see that the literal listener interprets highly habitual activities as having almost certainly occurred, moderately habitual activities as having perhaps occurred, and non-habitual activities as having almost certainly not occurred.


The pragmatic speaker is most likely to not describe a highly habitual activity explicitly, as expected, with relatively effortful utterances very unlikely.

In the case of moderately habitual activities, the speaker almost always chooses to describe the activity explicitly, preferring the least effortful utterance. Note that for moderately predictable activities, it is unclear whether the activity would really always be mentioned.

In the case of very unhabitual activities, the speaker most often describes the activity explicitly, again preferring the least effortful utterance.

Of note here is that this model does not capture the intuition that speakers should choose more effortful utterances for particularly unhabitual activities.


The pragmatic listener interprets unmentioned (high-predictability) activities as highly habitual, as expected.

Explicitly mentioned activities are all interpreted as roughly equally unhabitual, contrary to predictions.

This model correctly captures predicted effect (a): if an activity is described explicitly, its habituality is likely to be low. It cannot, however, capture effects (b) and (c): simply leveraging utterance costs is not sufficient.

There are three ways, in this model, of describing the activity explicitly: “plain,” with exclamatory prosody, and with a discourse marker signaling the utterance’s relevance to the discourse/listener, with the latter two more costly. The two more attentionally prominent utterances are never of any advantage to the literal listener, in terms of effectively communicating the current world state. Likewise, they are of no advantage to the speaker, either in terms of the likelihood of accurate message transmission, or in terms of the speaker’s presumed goal of conserving articulatory effort. As a consequence, the pragmatic listener will not infer that a more effortful utterance carries any special meaning, compared to the “plain” utterance.
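This limitation can be verified with a small grid approximation in Python. This is a re-implementation sketch, not the webppl code itself; the beta parameters are rounded from the fit above, and the grid resolution is arbitrary.

```python
import math

# Grid approximation of the joint-reasoning (hRSA) model above, used to
# check that all overt utterances yield the *same* habituality posterior.
# Beta(2.196, 0.411) approximates the empirical high-habit prior.
utterances = ["oh yeah", "exclamation", "plain", "(...)"]
cost = {"oh yeah": 5, "exclamation": 4, "plain": 3, "(...)": 0}
alpha = 20
a, b = 2.196, 0.411

def prior_density(h):
    # unnormalized Beta(a, b) density; the normalizer cancels below
    return h ** (a - 1) * (1 - h) ** (b - 1)

def speaker(state, h):
    # P(u | s, h) proportional to exp(-cost(u)) * L0(s | u, h)^alpha
    scores = {}
    for u in utterances:
        if u == "(...)":            # null utterance: L0 falls back on h
            l0 = h if state == "happened" else 1.0 - h
        else:                       # overt utterance: activity happened
            l0 = 1.0 if state == "happened" else 0.0
        scores[u] = math.exp(-cost[u]) * l0 ** alpha
    total = sum(scores.values())
    return {u: s / total for u, s in scores.items()}

def posterior_mean_h(utt, grid=400):
    num = den = 0.0
    for i in range(grid):
        h = (i + 0.5) / grid
        w = prior_density(h) * (h * speaker("happened", h)[utt]
                                + (1 - h) * speaker("didn't happen", h)[utt])
        num += h * w
        den += w
    return num / den

prior_mean = a / (a + b)            # about 0.84
means = {u: posterior_mean_h(u) for u in utterances}
print(means)
```

All three overt utterances produce exactly the same posterior over habituality, since their cost terms cancel under normalization; only the null utterance, which leaves habituality estimates high, is distinguished.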

Noisy channel hRSA

Standard RSA models are unable to derive pragmatic inferences of different strengths from semantically meaning-equivalent utterances, as mathematically proven in Bergen (2016, pp. 35–37). Simply assigning different costs to identically meaningful utterances, for instance, does not capture the intuitive effects of increased effort, or of an increased likelihood of accurate message transmission, on utterance choice or listener comprehension. Standard RSA models therefore cannot model any of the effects that increased utterance prominence may have on utterance choice or comprehension.

In order to capture effects (b) and (c) above (stronger inferences for more effortful utterances; more effortful utterances for unusual meanings), it is necessary to assign some communicative benefit to the more costly utterances, in terms of grabbing attention and/or facilitating recall, already active at the literal listener level. It is in fact plausible that comprehenders cannot accurately recall whether an activity has been explicitly mentioned, or not, as it has been shown that readers often cannot recall whether elements in a stereotyped activity sequence were explicitly mentioned (Bower et al., 1979). Further, informational redundancy, even at the multi-word level, in part has the purpose of ensuring that listeners attend to and accurately recall relevant information (Baker, Gill, & Cassell, 2008; Walker, 1993).

The noisy channel RSA model proposed by Bergen & Goodman (2015), with fairly minimal modification, successfully captures this intuition, although in this case we consider the likelihood that an utterance is attended to and stored in memory, rather than simply misheard:
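Written out to match the code below, each listener layer marginalizes over the utterance \(u_r\) it actually attends to and remembers, given the utterance \(u\) it received:

\[
P_{L_0}(s \mid u, h) \propto P(s \mid h) \sum_{u_r} P(u_r \mid u) \, [\![u_r]\!](s)
\]

with the speaker likewise marginalizing over what the listener will remember of the intended utterance, and the pragmatic listener over what it remembers of the utterance it heard.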

In this model, it’s assumed that every utterance has a non-trivial likelihood of not being actively attended to, and being mistaken for or mis-recalled as something akin to its “perceptual neighbors” (as well as a very small chance of being mis-recalled as a non-neighboring utterance). The “plain” utterance is considered to be perceptually neighboring to the two more effortful utterances, which are further perceptually neighboring to each other. The “null” utterance is relatively perceptually neighboring to the “plain” utterance, although this relationship is possibly asymmetrical, as comprehenders may be more likely to misremember highly typical activities as having been mentioned, than the other way around (this is, however, not critical for the functioning of this model).

It is, however, also possible that this machinery needs to be further modified to account for the fact that these utterances are only very loosely “neighbors,” and misperceiving a signal as something substantially different is somewhat less likely when talking about complex multi-word utterances.

Below is the webppl code for this model, which can be run either on webppl.org, or locally.

var beta_high_a = 2.19602561493963
var beta_high_b = 0.410857186535822
var beta_low_a = 0.59051473988806
var beta_low_b = 0.599422405762914


// Is this a high-habit activity (paying the cashier when shopping) or a
// low-habit activity (buying apples, paying cashier as habitual non-payer)?
// (mostly for demonstration)
var activity = ["low-habit","high-habit"]

// Assume uniform prior over activity types
var activityPrior = function() {
  categorical([0.5, 0.5], activity)
}

// Habituality priors
// beta distributions fit to empirical priors
var habitualityPrior = function(activity) {
  activity === "high-habit" ? sample(Beta({a: beta_high_a, b: beta_high_b})) :
  activity === "low-habit" ? sample(Beta({a: beta_low_a, b: beta_low_b})) :
  true
}

// Current activity state
// the activity being described at this point in time either took place, or didn't
var state = ["happened","didn't happen"]

// State priors
// whether the activity took place is dependent on prior likelihood
var statePrior = function(habituality) {
  flip(habituality) ? state[0] : state[1]
}

// Utterances (intended)
// choice of 4 utterances; prosody not modeled separately, as it affects only one variant
var utterance_i = ['oh yeah','exclamation','plain','(...)']

// Utterance cost
// (rough estimate of number of constituents + extra for articulatory effort)
var cost = {
  "oh yeah": 5,
  "exclamation": 4,
  "plain": 3,
  "(...)": 0
}

// Utterances (remembered/attended to)
// assume that utterance most likely to be recalled as itself, but also has
// non-trivial likelihood of being recalled as 'perceptually neighboring' utterance
// (with markers for plain utterance; vice versa; no utterance for "plain"
// utterance; and vice versa).
// alternately, this can be conceptualized as listener's belief of what the speaker
// *intended* to say - but unclear if below is best way to represent that
var utterance_r = function(utterance) {
  utterance === "oh yeah" ? categorical([0.9,0.1,0.1,0.0001], utterance_i) :
  utterance === "exclamation" ? categorical([0.1,0.9,0.1,0.0001], utterance_i) :
  utterance === "plain" ? categorical([0.1,0.1,0.9,0.05], utterance_i) :
  utterance === "(...)" ? categorical([0.0001,0.0001,0.1,0.9], utterance_i) :
  true
}

// Meaning
// literal meaning of all overt utterances is that activity happened.
// literal meaning of null utterance is consistent with all activity states
var meaning = function(utt,state) {
  utt === "oh yeah" ? state === "happened" : 
  utt === "exclamation" ? state === "happened" : 
  utt === "plain" ? state === "happened" : 
  utt === "(...)" ? true :
  true
}

// Speaker optimality
var alpha = 20

// Utterance prior
// utterance prior determined by utterance cost, as defined above
var utterancePrior = function() {
  var uttProbs = map(function(u) {return Math.exp(-cost[u]) }, utterance_i)
  return categorical(uttProbs, utterance_i)
}

// Literal listener
var literalListener = mem(function(utterance, habituality) {
  return Infer({model: function() {
    var state = statePrior(habituality)
    var remembered = utterance_r(utterance)
    condition(meaning(remembered,state))
    return state
  }})
})

// Speaker
var speaker = mem(function(state, habituality) {
  return Infer({model: function() {
    var utterance = utterancePrior()
    var remembered = utterance_r(utterance)
    factor(alpha * literalListener(remembered, habituality).score(state))
    return utterance
  }})
})

// Pragmatic listener
// assume high-habit activity for demonstration
var pragmaticListener = function(utterance, info) {
  return Infer({method: "rejection", samples: 5000, model: function() {
    var activity = "high-habit"
    var habituality = habitualityPrior(activity)
    var state = statePrior(habituality)
    var remembered = utterance_r(utterance)
    observe(speaker(state, habituality),remembered)
    info === "both" ? {state: state, habituality: habituality} :
    info === "state" ? state :
    info === "habituality" ? habituality :
    true
  }})
}


// literalListener("(...)",0.95)
// literalListener("(...)",0.5)
// literalListener("(...)",0.05)

// speaker("happened",0.95)
// speaker("happened",0.5)
// speaker("happened",0.05)

// pragmaticListener("(...)","both")
// pragmaticListener("plain","both")
// pragmaticListener("exclamation","both")
// pragmaticListener("oh yeah","both")

Below are the results of this model:

Results

The literal listener, as expected, perceives highly typical activities as having most likely happened, and so forth. As can be seen below, listeners are slightly more biased towards assuming that the activity occurred than its habituality alone would warrant. This is due to the relatively elevated likelihood that an unmentioned activity will be remembered as having been mentioned - it is unclear at present whether this is justified, or whether the model will need to be altered.


For high-habituality activities, as before, speakers are very unlikely to describe the activity explicitly - and if they do, they tend towards less effortful utterances.

Moderately habitual activities are only moderately likely to be mentioned, and again speakers gravitate towards less effortful utterances. This is consistent with expectations, as moderately predictable activities are less likely to be assumed to have not occurred - it is therefore not quite as important to grab the listener’s attention to ensure that they do, in fact, believe that the activity took place.

Non-habitual activities are virtually always described explicitly, and as can be seen, speakers prefer a higher-effort utterance that is less likely to not be attended to, or be misrecalled as a “null” utterance. This matches our predicted effect (c).


As can be seen here, pragmatic listeners perceive activities described overtly as less habitual, and furthermore perceive the higher-effort utterances as (slightly) less habitual than the lower-effort “plain” utterance, matching predicted effects (a) and (b).

Further, the lowest-effort “plain” utterance is slightly likely to be remembered as not having been uttered, with a very small chance of the same for higher-effort utterances.
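As a cross-check on these qualitative claims, the following Python sketch re-implements the noisy-channel model by grid enumeration (rounded beta parameters from the fit above, arbitrary grid size; again an illustration rather than the webppl code itself).

```python
import math

# Grid approximation of the noisy-channel hRSA model above, checking that
# habituality posteriors are now graded by utterance effort/prominence.
utterances = ["oh yeah", "exclamation", "plain", "(...)"]
cost = {"oh yeah": 5, "exclamation": 4, "plain": 3, "(...)": 0}
alpha = 20
a, b = 2.196, 0.411

# recall/attention confusion rows from utterance_r, normalized:
# P(remembered utterance | produced/heard utterance)
raw = {
    "oh yeah":     [0.9, 0.1, 0.1, 0.0001],
    "exclamation": [0.1, 0.9, 0.1, 0.0001],
    "plain":       [0.1, 0.1, 0.9, 0.05],
    "(...)":       [0.0001, 0.0001, 0.1, 0.9],
}
noise = {u: [x / sum(row) for x in row] for u, row in raw.items()}

def prior_density(h):
    return h ** (a - 1) * (1 - h) ** (b - 1)   # unnormalized Beta(a, b)

def lit_happened(u, h):
    # P(happened | u, h): only a remembered null utterance is
    # consistent with the activity not having happened
    p_null = noise[u][utterances.index("(...)")]
    return h / (h + (1 - h) * p_null)

def speaker(state, h):
    scores = {}
    for u in utterances:
        exp_l0 = 0.0   # expectation over what the listener remembers
        for j, ur in enumerate(utterances):
            l0 = lit_happened(ur, h)
            l0 = l0 if state == "happened" else 1 - l0
            exp_l0 += noise[u][j] * l0 ** alpha
        scores[u] = math.exp(-cost[u]) * exp_l0
    total = sum(scores.values())
    return {u: s / total for u, s in scores.items()}

def posterior_mean_h(utt, grid=400):
    num = den = 0.0
    for i in range(grid):
        h = (i + 0.5) / grid
        w = 0.0
        for state, p_s in (("happened", h), ("didn't happen", 1 - h)):
            sp = speaker(state, h)
            w += p_s * sum(noise[utt][j] * sp[u] for j, u in enumerate(utterances))
        num += h * prior_density(h) * w
        den += prior_density(h) * w
    return num / den

means = {u: posterior_mean_h(u) for u in utterances}
print(means)
```

The posterior habituality means are now graded: the null utterance leaves habituality high, while both higher-effort overt utterances yield lower habituality estimates than the plain one, unlike in the noiseless hRSA model, where all overt utterances are indistinguishable.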

Overall, this model qualitatively captures all of our predicted (including two of our empirically validated) effects, using machinery that has been established in RSA models of other pragmatic phenomena.

Comparison to empirical results

Overall, the results of the final model are a fairly close match, at least qualitatively, to those empirically measured in our experiments. To demonstrate this, I will plot the primary results of interest side-by-side. Currently I use the “typical-unpredictable” post-utterance ratings as a comparison for the “null” utterance, but this measure will likely be replaced by a measure of activity habituality estimates collected after an utterance that mentions shopping, but does not talk about the activity in question (or any other activity, shopping aside, that would imply paying).

The tails of the distributions are fatter in the empirical data, and there is a hint of bimodality around the 50% mark. Otherwise, the habituality densities match up fairly well qualitatively, and the mean habitualities are similar both qualitatively and numerically.

It is likely, however, that the model machinery will be further altered, or its parameters adjusted. Currently, this model stands as a proof of concept that it is possible to generate these inferences, and to generate stronger inferences for more effortful utterances, using established RSA machinery.

Baker, R. E., Gill, A. J., & Cassell, J. (2008). Reactive redundancy and listener comprehension in direction-giving. In Proceedings of the 9th SIGdial Workshop on Discourse and Dialogue (pp. 37–45). https://doi.org/10.3115/1622064.1622071

Bergen, L. (2016). Joint inference in pragmatic reasoning. Doctoral Dissertation, MIT, Cambridge, MA. Retrieved from http://hdl.handle.net/1721.1/106430

Bergen, L., & Goodman, N. D. (2015). The Strategic Use of Noise in Pragmatic Reasoning. Topics in Cognitive Science, 7(2), 336–350. https://doi.org/10.1111/tops.12144

Bower, G. H., Black, J. B., & Turner, T. J. (1979). Scripts in memory for text. Cognitive Psychology, 11(2), 177–220. https://doi.org/10.1016/0010-0285(79)90009-4

Degen, J., Tessler, M. H., & Goodman, N. D. (2015). Wonky worlds: Listeners revise world knowledge when utterances are odd. Proceedings of the 37th Annual Conference of the Cognitive Science Society, 548–553. Retrieved from https://cocolab.stanford.edu/papers/DegenEtAl2015-Cogsci.pdf

Frank, M. C., & Goodman, N. D. (2012). Predicting pragmatic reasoning in language games. Science, 336(6084), 998. https://doi.org/10.1126/science.1218633

Goodman, N. D., & Frank, M. C. (2016). Pragmatic Language Interpretation as Probabilistic Inference. Trends in Cognitive Sciences, 20(11), 818–829. https://doi.org/10.1016/j.tics.2016.08.005

Goodman, N. D., & Stuhlmüller, A. (2013). Knowledge and Implicature: Modeling Language Understanding as Social Cognition. Topics in Cognitive Science, 5(1), 173–184. https://doi.org/10.1111/tops.12007

Kravtchenko, E., & Demberg, V. (2015). Semantically underinformative utterances trigger pragmatic inferences. In Proceedings of the 37th Annual Conference of the Cognitive Science Society (pp. 1207–1212). Retrieved from https://mindmodeling.org/cogsci2015/papers/0213/paper0213.pdf

Kravtchenko, E., & Demberg, V. (n.d.). Informationally redundant utterances alter prior beliefs about event typicality. In preparation.

Walker, M. A. (1993). Informational redundancy and resource bounds in dialogue. Doctoral Dissertation, University of Pennsylvania, Philadelphia, PA.

Wilson, D., & Sperber, D. (2004). Relevance Theory. In L. R. Horn & G. Ward (Eds.), The handbook of pragmatics (Vol. 1, pp. 606–632). Oxford, UK: Blackwell Publishing. https://doi.org/10.1002/9780470756959.ch27